The views expressed are those of the authors and do not necessarily reflect the views of ASPA as an organization.
By Ygnacio Flores, Don Mason & Tracy Rickman
August 9, 2024
In 2020, the City of Santa Cruz, California, became the first city in the United States to ban predictive policing. The ban came a decade after Santa Cruz had been among the first cities to adopt predictive policing in the hope of using technology to stay a step ahead of crime. Facial recognition raised similar concerns about biased policing, and the City of San Francisco, California, banned that technology in 2019.
These bans were not isolated events; many cities have tried to balance safety and security with constitutional rights. Eager to reap the benefits of technology, police departments mostly worked independently with technology providers, entering unilateral agreements to deploy untested technology, and some departments now face litigation over their predictive policing programs.
More than 3,000 police departments worldwide already use AI image recognition to identify vehicle makes, models and license plates. This technology includes license plate readers (LPRs), also known as automatic license plate readers (ALPRs). In two separate instances in 2023, AI led to the apprehension of a mass-murder suspect and the rescue of 80 human trafficking victims.
A challenge for policing in its struggle to use artificial intelligence (AI) ethically is that policing in the United States is weakened by its own structure. Policing across the nation is decentralized, with multiple uncoordinated organizations. Policing, in the context of this article, refers to state police, municipal police and sheriff’s deputies, not federal law enforcement agencies. Police and sheriff’s departments operate at the municipal level through city councils or boards of supervisors.
Artificial intelligence refers to the intelligence exhibited by machines and spans a family of techniques, including the image recognition, large language models (LLMs) and supervised learning discussed in this article.
For those new to AI, it is easy to be overwhelmed by what computers can do. However, AI has yet to deliver on its promise of flawless performance. Those using AI in policing must ensure that learning models include supervised learning on ethically labeled training data. In LLMs, supervised learning shapes how the model generates the next token, that is, its response, when making decisions. In this sequence of AI learning, it is essential to pre-train the computer with weights that reflect what a community expects of its police. Together, these steps allow AI to reason as closely as possible to a police officer serving that community. Many professionals and academics remain concerned about AI reaching the point of singularity, where AI equals or surpasses human intelligence.
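To make the supervised-learning step concrete, here is a minimal sketch in Python using the scikit-learn library. The incident summaries, labels and categories below are invented for illustration only; they are not drawn from any real policing dataset or program.

```python
# Minimal sketch of supervised learning: a classifier trained on
# hypothetical, human-reviewed labels. All data below is invented
# for illustration; a real system would need vetted, audited data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical incident summaries with human-assigned labels.
reports = [
    "loud argument reported, no weapons seen",
    "stolen vehicle spotted near warehouse",
    "wellness check requested for elderly resident",
    "break-in in progress at storefront",
]
labels = ["low_priority", "property_crime", "community_service", "in_progress"]

# The pipeline converts text to features, then fits a linear model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(reports, labels)

# Predictions inherit whatever biases the labels contain, which is
# why ethically labeled training data matters so much.
print(model.predict(["vehicle reported stolen downtown"]))
```

The model can only be as fair as its labels: if the human-assigned categories encode bias, the trained classifier will reproduce it at scale.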
A challenge as law enforcement expands its use of AI is that no coordinated research and development department serves agencies nationwide. The traditions of home rule and independent governance leave law enforcement without a national structure responsible for safety and security, or the financial structure to support research and development. The Department of Defense, by contrast, has the resources, time and depth of research and development needed to test AI systems. Testing in the military is done primarily through simulation exercises, better known as wargaming. In its quest for the best decision-making models, the military discovered it could not trust AI to make the best decisions in situations where force is an available remedy. Computers think differently than humans and are limited to the ethics bound in proverbial zeros and ones. Controls on extreme decisions that would result in the use of governmental force must include a human as a go/no-go gauge, as the sketch below illustrates.
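As one hypothetical illustration of that go/no-go gauge, the Python sketch below blocks any force-related recommendation until a named human explicitly approves it. The data types, field names and example action are invented for this sketch, not taken from any deployed system.

```python
# Illustrative human-in-the-loop gate: the system may recommend,
# but only an explicit human decision releases a force-related action.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    involves_force: bool
    rationale: str

def execute(rec: Recommendation, human_approved: bool) -> str:
    # Non-force actions can proceed automatically.
    if not rec.involves_force:
        return f"Proceeding: {rec.action}"
    # Force-related actions require an explicit human "go" decision.
    if human_approved:
        return f"Human-approved: {rec.action}"
    return f"Blocked pending human review: {rec.action}"

# Without human approval, the force-related action never executes.
rec = Recommendation("dispatch tactical unit", involves_force=True,
                     rationale="AI-flagged threat")
print(execute(rec, human_approved=False))
```

The design choice is deliberate: the default path for any force-related action is refusal, so a system failure or a missing reviewer results in inaction rather than unchecked use of force.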
Developing AI requires powerful computers, and that computing power is beyond the budgets of many cities and counties. A prudent move is for the federal or state Departments of Justice to take the lead, providing the depth of research and development required to field AI systems that meet a community’s ethical and moral needs. Inadequate funding, or training AI on subpar data, can result in substandard performance when AI is used to police a community. Poor development of AI’s learning processes can teach programs to be biased, or to cheat in order to appear effective. Relying on commercial off-the-shelf technology should be avoided. Pursuing AI in policing needs to be done right, not right away. With proper investment from top-tier governments, advanced LLMs can gain multimodal capabilities that teach computers what a police officer faces daily in a community.
AI needs a human element to meet the needs of a community of humans. AI can enhance, but never replace, human judgment, and it should be just one of several tools at the police’s disposal. Guarding against AI going rogue requires audits in which a human checks the outcomes of training scenarios through Reinforcement Learning from Human Feedback (RLHF). Much like training a dog, AI can learn proper behaviors through RLHF.
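A minimal, hypothetical sketch of the audit step that paragraph describes appears below: human reviewers compare two candidate responses, and each preference is recorded with an audit trail. Full RLHF goes further, training a reward model on these preferences and then fine-tuning the AI with reinforcement learning; this sketch covers only the human-feedback collection, and all names and data are invented.

```python
# Simplified human-feedback loop: a reviewer compares two candidate
# responses; the preference record becomes training data for a reward
# model in full RLHF. Names and data here are illustrative only.
import datetime
import json

def record_preference(prompt: str, chosen: str, rejected: str,
                      reviewer: str) -> dict:
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "chosen": chosen,      # response the human preferred
        "rejected": rejected,  # response the human rejected
        "reviewer": reviewer,  # audit trail: who made the call
    }

log = [record_preference(
    prompt="Summarize the incident for the duty sergeant.",
    chosen="Two-vehicle collision, no injuries, tow requested.",
    rejected="Crash. Send everyone.",
    reviewer="auditor_01",
)]
print(json.dumps(log, indent=2))
```

Because every preference carries a timestamp and a reviewer identity, the feedback itself can be audited, keeping the human element accountable as well as present.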
Authors: Dr. Ygnacio “Nash” Flores and Don Mason are faculty at Rio Hondo College. Dr. Tracy Rickman is faculty at Tarleton State University.